[None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN #8156
base: main
Conversation
/bot run |
PR_Github #20675 [ run ] triggered by Bot |
PR_Github #20675 [ run ] completed with state |
cfba343 to af27ef9 (Compare)
/bot run |
PR_Github #20681 [ run ] triggered by Bot |
PR_Github #20681 [ run ] completed with state |
af27ef9 to 6fd1909 (Compare)
/bot run |
PR_Github #20694 [ run ] triggered by Bot |
/bot kill |
PR_Github #20735 [ kill ] triggered by Bot |
PR_Github #20694 [ run ] completed with state |
PR_Github #20735 [ kill ] completed with state |
6fd1909 to e94659b (Compare)
fa1783c to bd830ef (Compare)
/bot run |
PR_Github #20740 [ run ] triggered by Bot |
PR_Github #20740 [ run ] completed with state |
bd830ef to 729d3b5 (Compare)
/bot run |
PR_Github #20762 [ run ] triggered by Bot |
/bot kill |
PR_Github #21349 [ run ] completed with state |
```diff
@@ -918,7 +918,8 @@ def _create_tensor_like(self, origin_tensor: torch.Tensor,
     # TODO: FIXME, sometimes the content of the tensor can affect the performance, like MOE
     # One solution is to manipulate the tensor content to make it more like the real data
     # during the tuning process. This can be controlled in the preparation phase by the runner.
-    return torch.zeros(shapes, dtype=dtype, device=device)
+    # It must not use all zero tensors. Otherwise the timing results become unreliable.
+    return torch.randint(-5, 5, shapes, device=device).to(dtype)
```
I would suggest using a customizable pre-hook for creating the dummy tensors. Here is the PR that adds this feature: #6924.
We can replace this change with the pre-hook later.
The pre-hook in PR #6924 is useful, and we should use it once that is merged. I see a need to initialize data differently under different precisions due to the dynamic range differences.
Adding @nekorobov for visibility. When #6924 is merged, we may want to adjust how dummy tensors are initialized during autotuning under different precisions, in order to have an ideal mix of 0s and 1s.
#6924 has been merged. Because using random data instead of all-zero tensors can be considered an improvement for the perf stabilization in general, I think we can also keep the code change here. Thanks a lot for the suggestion~
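Riffing on the precision point above, a minimal sketch of what a precision-aware dummy-tensor factory could look like once the #6924 pre-hook is in use. The function name, signature, and value ranges here are illustrative assumptions, not the actual hook interface:

```python
import torch

def make_dummy_like(origin: torch.Tensor) -> torch.Tensor:
    # Hypothetical pre-hook body (not the real #6924 interface): mirror the
    # shape/dtype/device of the origin tensor, but fill it with small non-zero
    # integers so MoE routing and kernel timing are not skewed by all-zero data.
    if origin.dtype.is_floating_point:
        low, high = -5, 5   # small range keeps values representable in low-precision formats
    else:
        low, high = 0, 2    # integer tensors are usually indices/counts; keep them tiny
    values = torch.randint(low, high, tuple(origin.shape), device=origin.device)
    return values.to(origin.dtype)
```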
```python
    do_finalize,
)
kernel_runners: List[TunableRunner] = []
for tile_tokens_dim_ in list(generate_power_of_2_between(start=8, end=128)):
```
For this part, I want to confirm: currently, for this series of moe runners, `tile_tokens_dim` is created according to the value of `num_tokens` through the `calculate_tile_tokens_dim` method. Thus, when tuning `num_tokens`, we should determine the `tile_tokens_dim` after a specific `num_tokens` is given.
Now we tune each `num_tokens` with all the possible `tile_tokens_dim`. But this only stands when the two values are independent of each other. Has it been decoupled already?
But I still see `calculate_tile_tokens_dim` is used in the fake registration part and some other places, which means `tile_tokens_dim` is still a function of `num_tokens`. Did I miss anything here?
> But this only stands when the two values are independent of each other. Has it been decoupled already?
> But I still see calculate_tile_tokens_dim is used in the fake registration part and some other places, which means tile_tokens_dim is still a function of num_tokens. Did I miss anything here?

The ideal `tile_tokens_dim` correlates to `num_tokens` and outside factors such as the expert distribution. We don't have good heuristics yet. Ideally, we may predict a range of `tile_tokens_dim` according to `num_tokens` to reduce the tuning space.
Is my understanding correct that `register_fake()` must return correct output shapes for `torch.compile` to work? If so, I see a possible issue: the output shape `max_num_padded_tokens` depends on `tile_tokens_dim` in the current implementation, but we cannot determine `tile_tokens_dim` by heuristics because it's tuned by the autotuner. Is there any side effect to nailing down `tile_tokens_dim=128` in `register_fake()`?
> Is my understanding correct that register_fake() must return correct output shapes for torch.compile to work?

I believe so. The fake part will do the shape inference, so it is expected to have the exact same shape as the custom op implementation.
A possible solution might be always using the max tile value (like 128) to pad the tensor, which will be easier for the fake part. Not sure whether it will be suboptimal or not.
And I wonder how the following op uses this padded tensor, as its shape has changed. Is there any extra information delivered?
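To make the shape concern concrete, here is a back-of-the-envelope bound under the assumption that the padded layout rounds each expert's routed token count up to a whole tile. The formula and names are my own illustration of why the output shape depends on `tile_tokens_dim`, not the op's actual shape logic:

```python
def padded_tokens_upper_bound(num_tokens: int, top_k: int, num_experts: int,
                              tile_tokens_dim: int = 128) -> int:
    # Assumed worst case: each of the num_experts experts rounds its token count
    # up to a full tile, so sum_e ceil(c_e/T)*T <= num_tokens*top_k + num_experts*(T - 1).
    return num_tokens * top_k + num_experts * (tile_tokens_dim - 1)
```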
When the autotuner is enabled, the true `tile_tokens_dim` can be obtained from the returned tuple `(tile_tokens_dim, tactic)` of `choose_one()` (which I just added today in de91deb). Or, when the autotuner is disabled, estimate the tileN using the same heuristic. I haven't implemented either of these yet. Does it only affect `torch.compile`? If so, I'd like to have a follow-up PR for that.
I think only `torch.compile` will be affected.
```python
kernel_runners: List[TunableRunner] = []
for tile_tokens_dim_ in list(generate_power_of_2_between(start=8, end=128)):
    kernel_runners += [
        FP4BlockScaleMoERunner(
```
If I understand correctly that creating multiple runners is for tuning the tileN, and that `tile_tokens_dim` can be independent of `num_tokens` (this is important), then I suggest using tactics to represent them instead of different runners: we prefer different runners to represent a different backend or implementation, and different tactics in a single runner to represent a different kernel config.
A proper modification can be:
- Encode it into different tactics and prepare all the CPP runner instances in a single Python runner.
- In `get_valid_tactics`, create a product between the tactic list and the possible tileN values. Return a tuple `(tile_tokens_dim, tactic)` instead of a standalone tactic value to indicate the corresponding runner used for that tile value.
- In `forward`, invoke the correct runner according to the `tile_tokens_dim` passed within the tactic tuple.
Sounds reasonable. Let me revise it.
Added in de91deb
cca9b29 to a9b4191 (Compare)
/bot run |
PR_Github #21492 [ run ] triggered by Bot |
PR_Github #21492 [ run ] completed with state |
```cpp
{
    // returns (tileN, config)
    std::vector<std::vector<int64_t>> tactics;
    for (auto& [tileN, runner] : mRunners)
```
Actually, I think we can do it directly in the Python code because it might be simpler. But CPP here is also fine.
@nekorobov what's your opinion on this?
I do not have a strong opinion. Whatever is easier and faster in your opinion :) Since the CPP version is already implemented, I am fine with having it this way.
I don't have a strong preference either. Let's keep it as-is (cpp) so the Python side is cleaner.
```python
)
# FIXME: temporarily disable tuning multiple runners due to kernel failure in test:
# python3 -m pytest tests/integration/defs/accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_fp8_blockscale[throughput_mtp_trtllm]
tile_tokens_dim = calculate_tile_tokens_dim(hidden_states.shape[0],
```
Can tactics be shared by these runners with different values of `tile_tokens_dim`? Because we calculate this value only according to the input number of tokens, and during the warm-up phase it is set to the maximum number of tokens, the tuning process will always rely on the runner with that value. Is this expected? Some potential issues might be:
- Using a runner with the large tile to tune a small input problem size, which might be suboptimal.
- Using tactics across different tile values: some runners might not support others' tactics. Not sure if the test failures mentioned in the comment are caused by this.
I think the autotuner tunes the "dynamic dimension" of the input tensors from 1, 2, 4, ..., up to some large max_batch size during the warm-up phase, so all shapes should be covered by the autotuner.
Tactics are not interchangeable between runners. So for this MoE precision, we don't tune tileN due to an issue I met earlier. As a result, the autotuner has no visibility into which runner is best; it only sees the best tactic. `calculate_tile_tokens_dim()` fills that gap and chooses the right runner because its result is deterministic.
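For reference, a guess at what a helper like `generate_power_of_2_between(start=8, end=128)` yields for the tileN candidates; the same doubling pattern matches the warm-up sweep over `num_tokens` described above:

```python
def generate_power_of_2_between(start: int, end: int):
    # Yields start, 2*start, 4*start, ... while <= end (assumes start is a power of two).
    value = start
    while value <= end:
        yield value
        value *= 2

assert list(generate_power_of_2_between(start=8, end=128)) == [8, 16, 32, 64, 128]
```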
a9b4191 to 96ed965 (Compare)
…rtllm batchedGemm config.json Signed-off-by: Anthony Chang <[email protected]>
…input for better result Signed-off-by: Anthony Chang <[email protected]>
…face Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
Signed-off-by: Anthony Chang <[email protected]>
96ed965 to 3403a62 (Compare)
/bot run |
PR_Github #21572 [ run ] triggered by Bot |
PR_Github #21572 [ run ] completed with state |
/bot run |
PR_Github #21630 [ run ] triggered by Bot |
PR_Github #21630 [ run ] completed with state |
/bot run Rerun due to what seems to be a CI issue |
PR_Github #21657 [ run ] triggered by Bot |
LGTM. Thanks a lot for the effort~
@coderabbitai summary
Description
`randint(-5, 5)` appears to report benchmark results more accurately than `randn()`.
GPT-OSS-120b TP1
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.

run
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill
`kill`
Kill all running builds associated with pull request.

skip
`skip --comment COMMENT`
Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.